14 research outputs found

    How a human rights perspective could complement the EU’s AI Act

    The European Commission has proposed an AI Act for regulating artificial intelligence technologies. Daria Onitiu argues that adopting a human rights perspective would allow the proposed framework to better protect the safety, autonomy, and dignity of citizens.

    Algorithmic abstractions of ‘fashion identity’ and the role of privacy with regard to algorithmic personalisation systems in the fashion domain

    This paper delves into the nuances of ‘fashion’ in recommender systems and social media analytics, which shape and define an individual’s perception and self-relationality. Its aim is twofold: first, it supports a different perspective on privacy that focuses on the individual’s process of identity construction, considering the social and personal aspects of ‘fashion’. Second, it underlines the limitations of computational models in capturing the diverse meanings of ‘fashion’, whereby the algorithmic prediction of user preferences is based on an individual’s conscious and unconscious associations with fashion identity. I test both of these claims in the context of current concerns over the impact of algorithmic personalisation systems on individual autonomy and privacy: the creation of ‘filter bubbles’, the nudging of users beyond their conscious awareness, and the inherent bias in algorithmic decision-making. We need an understanding of privacy that addresses the inherent reduction of fashion identity to literal attributes and protects individual autonomy in shaping algorithmic approximations of the self.

    Deconstructing the right to privacy considering the impact of fashion recommender systems on an individual’s autonomy and identity

    Computing ‘fashion’ into a system of algorithms that personalise an individual’s shopping journey is not without risks to the way we express, assess, and develop aspects of our identity. This study uses an interdisciplinary research approach to examine how an individual’s interaction with algorithms in the fashion domain shapes our understanding of an individual’s privacy, autonomy, and identity. Drawing on fashion theory and psychology, I make two contributions to the meaning of privacy as a protection of identity and autonomy, and develop a more nuanced perspective on this concept using ‘fashion identity’. First, a more varied outlook on privacy allows us to examine how algorithmic constructions impose inherent reductions on individual sense-making in developing and reinventing personal fashion choices. A “right to not be reduced” allows us to focus on the individual’s practice of identity and choice with regard to the algorithmic entities incorporating imperfect semblances of the personal and social aspects of fashion. Second, I submit that we need a new perspective on the right to privacy to address the risks of algorithmic personalisation systems in fashion. There are gaps in the law when it comes to capturing the impact of algorithmic personalisation systems on an individual’s inference of knowledge about fashion, as well as the associations of fashion applied to individual circumstances. Focusing on the case law of the European Court of Human Rights (ECtHR) and the General Data Protection Regulation (GDPR), as well as aspects of EU non-discrimination and consumer law, I underline that we need to develop a proactive approach to the right to privacy entailing the incorporation of new values. I define these values to include an individual’s perception and self-relationality.
The study concludes with recommendations regarding the use of AI techniques in fashion using an international human rights approach. I argue that the “right to not be reduced” requires new interpretative guidance informing international human rights standards, including Article 17 of the International Covenant on Civil and Political Rights (ICCPR). Moreover, I consider that the “right to not be reduced” requires us to consider novel choices that inform the design and deployment of algorithmic personalisation systems in fashion, considering the UN Guiding Principles on Business and Human Rights and the EU Commission’s Proposal for an AI Act.

    Up-to-the-Minute Privacy Policies via Gossips in Participatory Epidemiological Studies

    Researchers and researched populations are actively involved in participatory epidemiology. Such studies collect many details about an individual. Recent developments in statistical inference can lead to sensitive information leaking from seemingly insensitive data about individuals. Typical safeguarding mechanisms are vetted by ethics committees; however, attack models are constantly evolving. Newly discovered threats, changes in applicable law, or an individual's perceptions can raise concerns that affect the study. Addressing these concerns is imperative to maintaining trust with the researched population. We are implementing Lohpi: an infrastructure for building accountability into data processing for participatory epidemiology. We address the challenge of data ownership by allowing institutions to host data on their own managed servers while being part of Lohpi. We update data access policies using gossip-based dissemination. We present Lohpi as a novel architecture for research data processing and evaluate its dissemination, overhead, and fault tolerance.
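    The idea of disseminating policy updates by gossip can be illustrated with a minimal simulation. This is an assumption-laden sketch, not the Lohpi implementation: the node names, fanout, and last-writer-wins versioning rule are all illustrative choices.

    ```python
    import random

    # Illustrative sketch only (not the Lohpi codebase): each node holds a
    # versioned data-access policy and periodically pushes it to a few random
    # peers; a node adopts an incoming policy only if its version is newer.

    class Node:
        def __init__(self, name):
            self.name = name
            self.policy = ("allow-all", 0)  # (policy text, version number)

        def receive(self, policy):
            # Last-writer-wins: adopt the policy only if it is strictly newer.
            if policy[1] > self.policy[1]:
                self.policy = policy

    def gossip_round(nodes, fanout=2):
        # Push gossip: every node sends its current policy to `fanout` peers.
        for node in nodes:
            peers = random.sample([n for n in nodes if n is not node], fanout)
            for peer in peers:
                peer.receive(node.policy)

    nodes = [Node(f"server-{i}") for i in range(10)]
    # A newly raised concern: one institution issues a stricter policy (v1).
    nodes[0].policy = ("deny-linkage-queries", 1)

    rounds = 0
    while any(n.policy[1] < 1 for n in nodes):
        gossip_round(nodes)
        rounds += 1
    ```

    With push gossip, the number of rounds to reach every node grows only logarithmically with the number of nodes in expectation, which is why gossip is attractive for keeping policies "up to the minute" without a central coordinator.
    
    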

    A response to the call for evidence on ‘Establishing a pro-innovation approach to regulating AI’ on behalf of the Regulation and Functionality nodes of the UKRI Trustworthy Autonomous Systems Network (TAS)

    We welcome the Government’s proposal to develop a new, coherent regulatory strategy for AI. While it maintains a sectoral focus, the development of cross-sector and cross-application principles and governance structures has the potential to create legal certainty, foster public acceptance, and facilitate responsible development of generic AI tools that are currently left unregulated. Our submission discusses various aspects of the proposal, including: the design and enforcement of the regulatory framework; the context-driven, cross-sectoral principles-based approach; and the coordination between regulatory bodies for coherence and monitoring. Our submission uses the regulation of AI as a medical device (AIaMD) as a sector-specific example to illustrate our recommendations. We have structured our response around the six questions in the consultation.

    Incorporating ‘fashion identity’ into the right to privacy
